
Multivariate Bayesian Last Layer for Regression: Uncertainty Quantification and Disentanglement

Wang, Han; Kawasaki, Eiji; Damblin, Guillaume; Daniel, Geoffrey

arXiv.org Machine Learning

We present new Bayesian Last Layer models in the setting of multivariate regression under heteroscedastic noise, and propose an optimization algorithm for parameter learning. The Bayesian Last Layer combines Bayesian modelling of the predictive distribution with neural networks for parameterization of the prior, and has the attractive property of uncertainty quantification in a single forward pass. The proposed framework is capable of disentangling aleatoric and epistemic uncertainty, and can be used to transfer a canonically trained deep neural network to new data domains with uncertainty-aware capabilities.
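
To make the single-forward-pass uncertainty decomposition concrete, here is a minimal NumPy sketch of a Bayesian last layer in the simplest setting: univariate output, homoscedastic noise, and a conjugate Gaussian prior over the last-layer weights. The toy data and all names are illustrative assumptions, not the paper's multivariate heteroscedastic model.

```python
import numpy as np

# Minimal Bayesian last layer sketch (univariate output, homoscedastic
# noise for simplicity; the paper treats the multivariate heteroscedastic
# case). Phi stands in for features from a fixed network's last hidden layer.
rng = np.random.default_rng(0)
n, d = 200, 16
Phi = rng.normal(size=(n, d))           # stand-in for learned features
w_true = rng.normal(size=d)
sigma2 = 0.25                           # aleatoric (observation) noise variance
y = Phi @ w_true + rng.normal(scale=np.sqrt(sigma2), size=n)

# Conjugate Gaussian prior w ~ N(0, alpha^-1 I) gives a closed-form posterior.
alpha = 1.0
S_inv = alpha * np.eye(d) + Phi.T @ Phi / sigma2   # posterior precision
S = np.linalg.inv(S_inv)                           # posterior covariance
m = S @ Phi.T @ y / sigma2                         # posterior mean

# One forward pass on a test point yields the mean and both uncertainty terms.
phi_star = rng.normal(size=d)
mean = phi_star @ m
epistemic = phi_star @ S @ phi_star     # uncertainty about the weights
aleatoric = sigma2                      # irreducible observation noise
print(mean, epistemic, aleatoric, epistemic + aleatoric)
```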


Integral Mixability: a Tool for Efficient Online Aggregation of Functional and Probabilistic Forecasts

Korotin, Alexander; V'yugin, Vladimir; Burnaev, Evgeny

arXiv.org Machine Learning

In this paper we extend the setting of online prediction with expert advice to function-valued forecasts. At each step of the online game several experts predict a function, and the learner has to efficiently aggregate these functional forecasts into a single forecast. We adapt basic mixable loss functions to compare functional predictions and prove that these "integral" expansions are also mixable. We call this phenomenon integral mixability. As an application, we consider various loss functions for the prediction of probability distributions and show that they are mixable by using our main result. The considered loss functions include the Continuous Ranked Probability Score (CRPS), optimal transport (OT) costs, and the Beta-2 and Kullback-Leibler (KL) divergences.
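
To illustrate how a pointwise mixable loss lifts to an integral one, the following sketch aggregates expert CDF forecasts under CRPS by running Vovk-style aggregation of the 2-mixable square loss at each threshold of a finite grid (at each threshold x, CRPS reduces to a square-loss game on the binary outcome 1{y <= x}). The grid, the Gaussian experts, and the choice of eta are my own illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
K = 101
grid = np.linspace(-5.0, 5.0, K)        # thresholds x for the CDF forecasts
dx = grid[1] - grid[0]
means = np.array([-1.0, 0.0, 0.5])      # three Gaussian-CDF experts

def crps(F, y):
    # CRPS(F, y) = int (F(x) - 1{y <= x})^2 dx, discretized on the grid.
    return np.sum((F - (grid >= y)) ** 2) * dx

eta = 2.0                               # square loss on [0,1] is 2-mixable
w = np.full(len(means), 1.0 / len(means))
learner_loss, expert_loss = 0.0, np.zeros(len(means))
for t in range(500):
    y = rng.normal(loc=0.5)             # outcomes favour the third expert
    F = np.stack([norm.cdf(grid, loc=m) for m in means])   # expert CDFs
    # Pointwise generalized prediction and square-loss substitution function:
    g0 = -np.log(w @ np.exp(-eta * F ** 2)) / eta          # outcome 0
    g1 = -np.log(w @ np.exp(-eta * (F - 1) ** 2)) / eta    # outcome 1
    F_hat = np.clip(0.5 - (g1 - g0) / 2, 0.0, 1.0)         # learner's CDF
    learner_loss += crps(F_hat, y)
    losses = np.array([crps(Fi, y) for Fi in F])
    expert_loss += losses
    w = w * np.exp(-eta * losses)       # exponential weights on integral loss
    w /= w.sum()
print(learner_loss, expert_loss.min())  # cumulative losses stay close
```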


Exp-Concavity of Proper Composite Losses

Kamalaruban, Parameswaran; Williamson, Robert C.; Zhang, Xinhua

arXiv.org Machine Learning

The goal of online prediction with expert advice is to find a decision strategy which will perform almost as well as the best expert in a given pool of experts, on any sequence of outcomes. This problem has been widely studied, and $O(\sqrt{T})$ and $O(\log{T})$ regret bounds can be achieved for convex losses (Zinkevich, 2003) and strictly convex losses with bounded first and second derivatives (Hazan et al., 2007), respectively. In special cases like the Aggregating Algorithm (Vovk, 1995) with mixable losses and the Weighted Average Algorithm (Kivinen and Warmuth, 1999) with exp-concave losses, it is possible to achieve $O(1)$ regret bounds. van Erven (2012) has argued that mixability and exp-concavity are roughly equivalent under certain conditions. Thus, by understanding the underlying relationship between these two notions, we can gain the best of both algorithms (the strong theoretical performance guarantees of the Aggregating Algorithm and the computational efficiency of the Weighted Average Algorithm). In this paper we provide a complete characterization of the exp-concavity of any proper composite loss. Using this characterization and the mixability condition of proper losses (van Erven et al., 2012), we show that it is possible to transform (re-parameterize) any $\beta$-mixable binary proper loss into a $\beta$-exp-concave composite loss with the same $\beta$. In the multi-class case, we propose an approximation approach for this transformation.
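
As a concrete instance of the exp-concave case, the sketch below runs the Weighted Average Algorithm on log loss, which is 1-exp-concave, so predicting the weight-averaged probability guarantees regret at most $\ln(N)/\eta$. The constant-probability experts and Bernoulli outcomes are illustrative assumptions of mine, not an example from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, eta = 5, 2000, 1.0
experts = rng.uniform(0.1, 0.9, size=N)    # constant-probability experts
w = np.full(N, 1.0 / N)
learner_loss, expert_loss = 0.0, np.zeros(N)

def log_loss(p, y):
    # Log loss is 1-exp-concave: exp(-log_loss(p, y)) is linear in p.
    return -np.log(p if y == 1 else 1.0 - p)

for t in range(T):
    y = int(rng.random() < 0.7)            # Bernoulli(0.7) outcomes
    p_hat = float(w @ experts)             # weighted-average prediction
    learner_loss += log_loss(p_hat, y)
    losses = np.array([log_loss(p, y) for p in experts])
    expert_loss += losses
    w *= np.exp(-eta * losses)             # exponential weights update
    w /= w.sum()

regret = learner_loss - expert_loss.min()
print(regret, np.log(N) / eta)             # regret stays below ln(N)/eta
```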